
# Trillion-token Pretraining

Dots.llm1.inst
License: MIT
dots.llm1 is a large-scale Mixture-of-Experts (MoE) model that activates 14 billion of its 142 billion total parameters per token, with performance comparable to state-of-the-art models.
Tags: Large Language Model · Transformers · Supports Multiple Languages
rednote-hilab
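
Since the entry lists Transformers support, a minimal inference sketch follows. It assumes the checkpoint is hosted on Hugging Face as `rednote-hilab/dots.llm1.inst` and exposes the standard causal-LM API; exact loading arguments (for example whether `trust_remote_code` is required on your transformers version) may differ.

```python
# Minimal sketch: running dots.llm1.inst for chat inference via transformers.
# Assumption: the checkpoint is loadable through the standard causal-LM API;
# older transformers releases may additionally need trust_remote_code=True.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "rednote-hilab/dots.llm1.inst"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# Only ~14B of the 142B total parameters are active per token (MoE routing),
# but the full checkpoint must still fit in memory, so shard across devices.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    {"role": "user", "content": "Explain mixture-of-experts models in one sentence."}
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=128)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```

Because an MoE model stores all experts but routes each token through only a few, memory requirements track the 142B total parameter count while per-token compute tracks the 14B active count.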